Results 1 - 5 of 5
1.
Sci Rep ; 14(1): 9221, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38649681

ABSTRACT

Technological advances in head-mounted displays (HMDs) facilitate the acquisition of physiological data of the user, such as gaze, pupil size, or heart rate. Still, interactions with such systems can be prone to errors, including unintended behavior or unexpected changes in the presented virtual environments. In this study, we investigated whether multimodal physiological data can be used to decode error processing, which has been studied, to date, with brain signals only. We examined the feasibility of decoding errors solely with pupil size data and proposed a hybrid decoding approach combining electroencephalographic (EEG) and pupillometric signals. Moreover, we analyzed whether hybrid approaches can improve existing EEG-based classification approaches, focusing on setups that offer increased usability for practical applications, such as the game-like virtual reality flight simulation presented here. Our results indicate that classifiers trained with pupil size data can decode errors above chance. Moreover, hybrid approaches yielded improved performance compared to EEG-based decoders in setups with a reduced number of channels, which is crucial for many out-of-the-lab scenarios. These findings contribute to the development of hybrid brain-computer interfaces, particularly in combination with wearable devices, which allow for easy acquisition of additional physiological data.
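The hybrid decoding idea described above can be sketched as feature-level fusion: EEG features and pupil-size features are concatenated into one vector per trial and fed to a single classifier. The sketch below uses synthetic data; the feature dimensions, the logistic-regression classifier, and the 0.5 class shift are illustrative assumptions, not the paper's actual pipeline.

```python
# Feature-level fusion of EEG and pupillometric features for error decoding.
# All data is synthetic and only serves to make the sketch runnable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200
eeg_feats = rng.normal(size=(n_trials, 32))    # e.g. band power per EEG channel
pupil_feats = rng.normal(size=(n_trials, 4))   # e.g. pupil-size statistics per trial
labels = rng.integers(0, 2, size=n_trials)     # error vs. correct trial

# Shift the class means so the synthetic problem is learnable.
eeg_feats[labels == 1] += 0.5
pupil_feats[labels == 1] += 0.5

# Fuse modalities by concatenating their feature vectors.
hybrid = np.hstack([eeg_feats, pupil_feats])
scores = cross_val_score(LogisticRegression(max_iter=1000), hybrid, labels, cv=5)
print(f"hybrid CV accuracy: {scores.mean():.2f}")
```

With fewer EEG channels (a smaller `eeg_feats` matrix), the relative contribution of the pupil features grows, which is consistent with the abstract's finding that hybrid decoders help most in reduced-channel setups.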


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Pupil , Virtual Reality , Humans , Electroencephalography/methods , Adult , Male , Pupil/physiology , Female , Young Adult , Computer Simulation , Brain/physiology , Heart Rate/physiology
2.
Article in English | MEDLINE | ID: mdl-38083691

ABSTRACT

Algorithms detecting erroneous events, as used in brain-computer interfaces, usually rely solely on neural correlates of error perception. The increasing availability of wearable displays with built-in pupillometric sensors enables access to additional physiological data, potentially improving error detection. Hence, we measured both electroencephalographic (EEG) and pupillometric signals from 19 participants while they performed a navigation task in an immersive virtual reality (VR) setting. We found EEG and pupillometric correlates of error perception and significant differences between distinct error types. Further, we found that actively performing tasks delays error perception. We believe that the results of this work could contribute to improving error detection, which has rarely been studied in the context of immersive VR.


Subject(s)
Brain-Computer Interfaces , Virtual Reality , Humans , Computer Simulation , Electroencephalography , Perception
3.
IEEE Trans Haptics ; 15(1): 103-108, 2022.
Article in English | MEDLINE | ID: mdl-34962880

ABSTRACT

Vibrotactile skin-reading, which has gained attention with recent hardware advances, effectively conveys rich information via vibrotactile patterns. However, training users to recognize vibrotactile patterns and associate them with their meaning is time-consuming and tedious. Conventional training methods use repeated exposure to the vibrotactile stimuli along with visual and auditory cues of the corresponding symbol. This work proposes a novel visual-based training method to teach users the associations between semantic information and vibrotactile patterns. Our proposed visual explanation training is compared with the conventional training method in a study with 18 participants. Results show that participants achieve better performance with the new visual explanation training when identifying single English alphabet characters. Moreover, the proposed training also incurred a significantly lower workload (NASA TLX) and was preferred by study participants. The proposed method is thus effective and offers a less demanding way to train users for skin-reading.


Subject(s)
Skin Physiological Phenomena , Vibration , Attention , Cues , Humans , Skin , Touch
4.
PLoS One ; 16(2): e0245320, 2021.
Article in English | MEDLINE | ID: mdl-33534848

ABSTRACT

Motorsports have become an excellent playground for testing the limits of technology, machines, and human drivers. This paper presents a study that used a professional racing simulator to compare the behavior of human and autonomous drivers under an aggressive driving scenario. A professional simulator offers a close-to-real emulation of underlying physics and vehicle dynamics, as well as a wealth of clean telemetry data. In the first part of the study, the participants' task was to achieve the fastest lap while keeping the car on the track. We grouped the resulting laps according to performance (lap time), defining driving behaviors at various performance levels. An extensive analysis of vehicle control features obtained from telemetry data was performed with the goal of predicting driving performance and informing an autonomous system. In the second part of the study, a state-of-the-art reinforcement learning (RL) algorithm was trained to control the brake, throttle, and steering of the simulated racing car. We investigated how the features used to predict driving performance in humans can be used in autonomous driving. Our study investigates human driving patterns with the goal of finding traces that could improve the performance of RL approaches. Conversely, these traces can also be applied to training (professional) drivers to improve their racing line.
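The control interface the abstract describes, an agent issuing continuous steering, throttle, and brake commands against simulator telemetry, can be sketched with a toy environment. The `ToyTrack` dynamics, the reward shaping, and the proportional baseline policy below are invented for illustration; the study used a professional racing simulator and a state-of-the-art RL algorithm, not this toy loop.

```python
# Minimal sketch of a steering/throttle/brake control loop, of the kind
# an RL agent would be trained in. The environment is a made-up toy.
import numpy as np

class ToyTrack:
    """Toy 1-D 'track': reward speed, penalize drifting off the centerline."""
    def __init__(self):
        self.offset, self.speed = 0.0, 0.0
    def step(self, steering, throttle, brake):
        self.speed = max(0.0, self.speed + throttle - brake)
        self.offset += steering + 0.01 * self.speed  # drift grows with speed
        reward = self.speed - abs(self.offset)       # fast, but stay on line
        return np.array([self.offset, self.speed]), reward

env = ToyTrack()
rng = np.random.default_rng(1)
total = 0.0
for _ in range(100):
    steering = -0.5 * env.offset          # naive proportional correction
    throttle, brake = 0.2, 0.0            # constant acceleration, no braking
    obs, reward = env.step(steering + 0.01 * rng.normal(), throttle, brake)
    total += reward
print(f"episode return: {total:.1f}")
```

An RL algorithm would replace the hand-coded proportional policy with a learned one, and the human telemetry features mentioned in the abstract (braking points, steering profiles) could shape the reward or seed the policy.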


Subject(s)
Accidents, Traffic/psychology , Aggressive Driving/psychology , Reaction Time/physiology , Sports/psychology , Humans , Simulation Training
5.
IEEE Trans Vis Comput Graph ; 18(4): 565-72, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22402683

ABSTRACT

In this paper, we explore techniques that aim to improve site understanding for outdoor Augmented Reality (AR) applications. While the first-person perspective in AR is a direct way of filtering and zooming in on a portion of the data set, it severely narrows the overview of the situation, particularly over large areas. We present two interactive techniques to overcome this problem: multi-view AR and variable perspective view. We describe in detail the conceptual, visualization, and interaction aspects of these techniques and their evaluation through a comparative user study. The results obtained strengthen the validity of our approach and the applicability of our methods to a large range of application domains.


Subject(s)
User-Computer Interface , Adult , Computer Graphics , Environment , Female , Humans , Male